Dynamic Subgradient Methods
Abstract
Lagrangian relaxation is commonly used to generate bounds for mixed-integer linear programming problems. However, when the number of dualized constraints is very large (exponential in the dimension of the primal problem), explicit dualization is no longer possible. To reduce the dual dimension, various heuristics have been proposed; they rely on a separation procedure that dynamically selects a restricted set of constraints to dualize over the iterations. This relax-and-cut approach has proven numerically efficient on many combinatorial problems. We show primal-dual convergence of such a strategy when an adapted subgradient method is used for the dual step.
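The relax-and-cut strategy described in the abstract can be sketched as a projected subgradient method on the Lagrangian dual in which multipliers are created only for constraints returned by a separation step. Everything below (the vectors `c`, `b`, the matrix `A`, the step-size rule, and the separation tolerance) is a hypothetical toy choice for illustration, not data or code from the paper.

```python
import itertools

# Toy problem: minimize c.x over x in {0,1}^3 subject to A x <= b,
# where the rows of A are the constraints to be dualized.
# Relax-and-cut sketch: keep multipliers only for a dynamically grown
# active set of rows, adding the most violated row found by separation.
c = [-3.0, -2.0, -4.0]
A = [[2.0, 1.0, 3.0],
     [1.0, 2.0, 1.0],
     [3.0, 1.0, 2.0]]
b = [3.0, 3.0, 4.0]
X = list(itertools.product([0, 1], repeat=3))   # explicit primal ground set

def row_slack(i, x):
    """A_i . x - b_i  (positive means row i is violated at x)."""
    return sum(A[i][j] * x[j] for j in range(3)) - b[i]

def lagrangian_min(lam):
    """Minimize c.x + sum_{i in lam} lam[i] * (A_i.x - b_i) over x in X."""
    def val(x):
        return (sum(cj * xj for cj, xj in zip(c, x))
                + sum(l * row_slack(i, x) for i, l in lam.items()))
    x_best = min(X, key=val)
    return x_best, val(x_best)

lam = {}                      # multipliers for the active rows only
best_bound = -float("inf")    # best lower bound on the primal optimum
for k in range(200):
    x, bound = lagrangian_min(lam)
    best_bound = max(best_bound, bound)
    # Separation: dualize the most violated row not yet active.
    cand = [(row_slack(i, x), i) for i in range(3) if i not in lam]
    if cand and max(cand)[0] > 1e-9:
        lam[max(cand)[1]] = 0.0
    # Projected subgradient step on the active multipliers.
    step = 1.0 / (k + 1)
    for i in list(lam):
        lam[i] = max(0.0, lam[i] + step * row_slack(i, x))
```

For this minimization, weak duality guarantees that every dual value is a lower bound on the primal optimum, so `best_bound` tracks the best bound produced along the iterations.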
Similar works
Nonsmooth analysis and subgradient methods for averaging in dynamic time warping spaces
Time series averaging in dynamic time warping (DTW) spaces has been successfully applied to improve pattern recognition systems. This article proposes and analyzes subgradient methods for the problem of finding a sample mean in DTW spaces. The class of subgradient methods generalizes existing sample mean algorithms such as DTW Barycenter Averaging (DBA). We show that DBA is a majorize-minimize ...
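Since the averaging problem above is posed in DTW spaces, a minimal sketch of the underlying DTW distance may help. This is the standard dynamic-programming recursion with squared local costs (a common choice, though not the only one), not code from the article:

```python
# Dynamic time warping distance between two 1-D series via the
# classical O(n*m) dynamic program, using squared local costs.
def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

`dtw([1, 2, 3], [1, 2, 3])` returns `0`; a sample mean in DTW space minimizes the sum of such distances to the input series, which is the nonsmooth objective the subgradient methods in the article address.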
Dynamic Routing and Queue Management via Bundle Subgradient Methods
In this paper we propose a purely distributed dynamic network routing algorithm that simultaneously regulates queue sizes across the network. The algorithm is distributed since each node decides on its outgoing link flows based only on its own and its immediate neighbors’ information. Therefore, this routing method will be adaptive and robust to changes in the network topology, such as the node...
A Fuzzy Gradient Method in Lagrangian Relaxation for Integer Programming Problems
A major issue in Lagrangian relaxation for integer programming problems is maximizing the dual function, which is piecewise linear and consists of many facets. Available methods include the subgradient method, the bundle method, and the recently developed surrogate subgradient method. Each of these methods, however, has its own limitations. Based on the insights obtained from these method...
An effective optimization algorithm for locally nonconvex Lipschitz functions based on mollifier subgradients
Stochastic Subgradient Methods
Stochastic subgradient methods play an important role in machine learning. In this project we introduced the concepts of subgradient methods and stochastic subgradient methods, discussed their convergence conditions, and examined their strengths and weaknesses relative to competing approaches. We demonstrated the application of (stochastic) subgradient methods to machine learning with a running example of tr...
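The stochastic subgradient idea can be sketched on a toy nonsmooth problem: at each iteration one term of a finite sum is sampled and a subgradient of that term drives the update. The one-dimensional hinge-loss data and the step-size rule below are illustrative assumptions, not taken from the project described above.

```python
import random

# Minimize the nonsmooth average hinge loss
#   f(w) = mean_i max(0, 1 - y_i * w * x_i)
# over a scalar w, by sampling one term per iteration.
random.seed(0)
data = [(1.0, 1), (2.0, 1), (-1.5, -1), (-0.5, -1)]  # toy (x, y) pairs

w = 0.0
for k in range(1, 501):
    x, y = random.choice(data)             # sample one term of the sum
    g = -y * x if y * w * x < 1 else 0.0   # a subgradient of that term
    w -= (0.5 / k) * g                     # diminishing step size
```

With a diminishing step size, the iterates approach a minimizer of the expected loss; here every update pushes `w` upward until all margins reach 1, after which the sampled subgradients vanish.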